Diverse Image Captioning


Variational Structured Semantic Inference for Diverse Image Captioning

Neural Information Processing Systems

Despite the exciting progress in image captioning, generating diverse captions for a given image remains an open problem. Existing methods typically apply generative models such as the Variational Auto-Encoder to diversify the captions; however, they neglect two key factors of diverse expression, i.e., lexical diversity and syntactic diversity. To model these two inherent diversities in image captioning, we propose a Variational Structured Semantic Inferring model (termed VSSI-cap) executed in a novel structured encoder-inferer-decoder schema. VSSI-cap mainly innovates in a novel structure, i.e., the Variational Multi-modal Inferring tree (termed VarMI-tree). In particular, conditioned on the visual-textual features from the encoder, the VarMI-tree models the lexical and syntactic diversities by inferring their latent variables (with variations) through approximate posterior inference guided by a visual semantic prior. Then, a reconstruction loss and the posterior-prior KL-divergence are jointly estimated to optimize the VSSI-cap model. Finally, diverse captions are generated from the visual features and the latent variables of this structured encoder-inferer-decoder model. Experiments on the benchmark dataset show that the proposed VSSI-cap achieves significant improvements over the state of the art.
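For reference, the training objective sketched in this abstract (reconstruction loss plus posterior-prior KL-divergence) is the standard conditional-VAE evidence lower bound. A generic form is shown below; this is a sketch of the family of objectives, not the paper's exact structured loss over the VarMI-tree latents:

```latex
% Generic conditional-VAE objective (sketch; not the paper's exact notation).
% x: image, y: caption, z: latent variables (here, lexical and syntactic latents),
% q_\phi: approximate posterior, p(z|x): visual semantic prior.
\mathcal{L}(\theta,\phi) =
    \underbrace{\mathbb{E}_{q_\phi(z \mid x, y)}\!\left[\log p_\theta(y \mid x, z)\right]}_{\text{reconstruction}}
  \;-\;
    \underbrace{\mathrm{KL}\!\left(q_\phi(z \mid x, y)\,\Vert\,p(z \mid x)\right)}_{\text{posterior--prior divergence}}
```

Maximizing this bound jointly trains the inferer (posterior) and decoder while keeping the posterior close to the visual semantic prior, which is what allows sampling z at test time to yield diverse captions.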


Diverse Image Captioning with Context-Object Split Latent Spaces

Neural Information Processing Systems

Diverse image captioning models aim to learn the one-to-many mappings that are innate to cross-domain datasets, such as those of images and texts. Current methods for this task are based on generative latent variable models, e.g., VAEs with structured latent spaces. Yet the amount of multimodality captured by prior work is limited to that of the paired training data: the true diversity of the underlying generative process is not fully captured. To address this limitation, we leverage the contextual descriptions in the dataset that explain similar contexts in different visual scenes. To this end, we introduce a novel factorization of the latent space, termed context-object split, to model diversity in contextual descriptions across images and texts within the dataset. Our framework not only enables diverse captioning through context-based pseudo-supervision, but extends this to images with novel objects and without paired captions in the training data. We evaluate our COS-CVAE approach on the standard COCO dataset and on the held-out COCO dataset consisting of images with novel objects, showing significant gains in accuracy and diversity.
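A minimal sketch of what a context-object split latent space could look like in a conditional-VAE captioner is below. The module names (ctx_head, obj_head) and dimensions are illustrative assumptions, not the authors' implementation; the point is only that two separate heads infer factored latents that are concatenated for decoding:

```python
# Sketch of a context-object split latent space (illustrative, not COS-CVAE's code).
import torch
import torch.nn as nn

class SplitLatentEncoder(nn.Module):
    def __init__(self, feat_dim=2048, z_dim=128):
        super().__init__()
        # Separate heads infer the two factors of the latent space.
        self.ctx_head = nn.Linear(feat_dim, 2 * z_dim)  # context: scene-level description
        self.obj_head = nn.Linear(feat_dim, 2 * z_dim)  # objects: detected entities

    def forward(self, ctx_feat, obj_feat):
        # Each head predicts a diagonal Gaussian (mean and log-variance).
        mu_c, logvar_c = self.ctx_head(ctx_feat).chunk(2, dim=-1)
        mu_o, logvar_o = self.obj_head(obj_feat).chunk(2, dim=-1)
        # Reparameterized samples; concatenation gives the factored latent.
        z_ctx = mu_c + torch.randn_like(mu_c) * (0.5 * logvar_c).exp()
        z_obj = mu_o + torch.randn_like(mu_o) * (0.5 * logvar_o).exp()
        return torch.cat([z_ctx, z_obj], dim=-1)
```

The factorization is what makes the paper's pseudo-supervision possible: a context latent inferred from one image's description can, in principle, be recombined with the object latent of another scene.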


Diverse Image Captioning with Context Object Split Latent Spaces

Neural Information Processing Systems

The word dimension for the embedding layer is 300. In Tab. 7 we further evaluate the diversity of COS-CVAE using self-CIDEr. We provide additional qualitative results in the supplementary tables. In Tab. 12 we show the diverse captions for novel objects generated by our model, together with the corresponding image regions. The evaluation server for nocaps accepts only one caption per image and does not support methods modeling one-to-many relationships between images and captions. In Figure 1 (left) we show the accuracy and diversity scores averaged across annotators; in Figure 1 (right) we show the accuracy and diversity scores from each annotator. We find that the captions generated by COS-CVAE are scored as more accurate compared to COS-CVAE (paired).
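For readers unfamiliar with self-CIDEr: it measures the diversity of a caption set via the spectrum of a pairwise-similarity matrix. The sketch below is a simplified variant of that idea, with unigram overlap standing in for CIDEr; it is not the exact published formula:

```python
# Simplified spectral diversity score in the spirit of self-CIDEr.
# Unigram overlap stands in for CIDEr; illustration only, assumes >= 2 captions.
import numpy as np

def overlap_sim(a, b):
    """Toy stand-in for CIDEr: unigram Jaccard overlap between two captions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def spectral_diversity(captions):
    m = len(captions)
    K = np.array([[overlap_sim(a, b) for b in captions] for a in captions])
    eig = np.linalg.eigvalsh(K)          # K is symmetric, so eigvalsh is safe
    r = eig.max() / eig.sum()            # 1/m (all distinct) ... 1 (all identical)
    return 1.0 - (r - 1.0 / m) / (1.0 - 1.0 / m)  # rescaled: higher = more diverse

print(spectral_diversity(["a dog on grass", "a puppy runs outside", "the stock market fell"]))
```

If all captions are identical the similarity matrix has one dominant eigenvalue and the score is 0; if all are disjoint the spectrum is flat and the score is 1.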


RONA: Pragmatically Diverse Image Captioning with Coherence Relations

Ramakrishnan, Aashish Anantha, Ramakrishnan, Aadarsh Anantha, Lee, Dongwon

arXiv.org Artificial Intelligence

Writing Assistants (e.g., Grammarly, Microsoft Copilot) traditionally generate diverse image captions by employing syntactic and semantic variations to describe image components. However, human-written captions prioritize conveying a central message alongside visual descriptions, using pragmatic cues. To enhance pragmatic diversity, it is essential to explore alternative ways of communicating these messages in conjunction with visual content. To address this challenge, we propose RONA, a novel prompting strategy for Multi-modal Large Language Models (MLLMs) that leverages Coherence Relations as an axis for variation. We demonstrate that RONA generates captions with better overall diversity and ground-truth alignment than MLLM baselines across multiple domains. Our code is available at: https://github.com/aashish2000/RONA
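A hypothetical sketch of what a coherence-relation-driven prompting loop could look like is below. The relation set and prompt wording here are assumptions for illustration; see the authors' repository for the actual strategy. `call_mllm` is a placeholder for any MLLM client:

```python
# Hypothetical prompting loop in the spirit of RONA (not the authors' code).
COHERENCE_RELATIONS = ["Elaboration", "Background", "Cause-Effect"]  # assumed set

def build_prompts(request="Caption this image."):
    prompts = []
    for rel in COHERENCE_RELATIONS:
        prompts.append(
            f"{request} Write the caption so that it relates to the image via "
            f"the '{rel}' coherence relation: vary the central message, not "
            f"just the wording."
        )
    return prompts

def diverse_captions(image, call_mllm):
    # One caption per relation, targeting pragmatically (not just lexically)
    # distinct outputs.
    return [call_mllm(image, p) for p in build_prompts()]
```

The key design choice is that diversity comes from varying the discourse relation between caption and image, rather than from sampling temperature or lexical paraphrase.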


Reviews: Variational Structured Semantic Inference for Diverse Image Captioning

Neural Information Processing Systems

Originality: The proposed approach of modelling syntactic and lexical diversity within the latent space to generate diverse image captions is novel. Quality: To establish that the generated captions are diverse, various standard diversity metrics are measured for the proposed method in Tab. 2. Some qualitative results demonstrating diverse captions, and diversity conditioned on different visual parse tree probabilities, are shown in Figures 1 and 6. These experiments help justify the core components of the proposed approach. Clarity: The paper is well written and easy to follow. Careful illustrations in Figures 1 and 3 are used as an aid while describing the proposed method.
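For context, the "standard diversity metrics" in such evaluations typically include distinct n-gram ratios (Div-1, Div-2) over the set of captions sampled for one image. A minimal sketch of Div-n, assuming whitespace tokenization:

```python
# Div-n: ratio of distinct n-grams to total n-grams across all captions
# generated for one image (higher = more diverse).
def div_n(captions, n=1):
    ngrams = []
    for c in captions:
        toks = c.lower().split()
        ngrams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

caps = ["a man rides a horse", "a person on a horse", "a man rides a horse"]
print(div_n(caps, 1), div_n(caps, 2))  # Div-1 and Div-2
```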


Reviews: Variational Structured Semantic Inference for Diverse Image Captioning

Neural Information Processing Systems

After considering the author response and discussing the submission, the reviewers all voted to accept the submission. The approach presented puts forward a novel framing for caption diversity, and the empirical evaluation supports the paper's contributions. The document as a whole could use additional clarity, so I urge the authors to spend time revising it to broaden the impact of this work.


Review for NeurIPS paper: Diverse Image Captioning with Context-Object Split Latent Spaces

Neural Information Processing Systems

Weaknesses: My main issues are with some of the evaluations in the paper: 1. Oracle accuracy is a bit of a cheat, as it scores all proposed sentences and selects the top-scoring one. I notice that consensus re-ranking is also reported in the supplemental. The results there are good in comparison to prior work, so I am not sure why they are not mentioned in the paper (or whether they could be squeezed in by rearranging Table 3). However, even the consensus-based ranking is a bit odd, since it relies on finding nearest-neighbor training images (how are nearest neighbors found? Stronger networks will do a better job, won't they?).
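The two selection protocols the reviewer contrasts can be sketched as follows. `metric(cand, refs)` stands in for a caption metric such as CIDEr, and `nearest_train_images` is the retrieval step the reviewer questions, since its quality depends on the feature network used:

```python
# Sketch of the two candidate-selection protocols discussed above.
def oracle_selection(candidates, gt_refs, metric):
    # Upper bound: score every sampled caption against the ground truth
    # and keep the best one -- "a bit of a cheat", as the review notes.
    return max(candidates, key=lambda c: metric(c, gt_refs))

def consensus_rerank(candidates, image, nearest_train_images, train_caps, metric):
    # Consensus re-ranking: pool the captions of visually similar training
    # images and rank candidates by agreement with that pool instead.
    pool = [cap for img in nearest_train_images(image) for cap in train_caps[img]]
    return max(candidates, key=lambda c: metric(c, pool))
```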


Review for NeurIPS paper: Diverse Image Captioning with Context-Object Split Latent Spaces

Neural Information Processing Systems

All reviewers recommend acceptance (one indicated it only in the discussion but did not update their score). The reviewers appreciate the author response and value the paper for its contributions, including:
- the problem addressed
- the idea and method to split context and objects
- the extensive evaluation
I agree with this evaluation and accept; however, I expect the authors to include the clarifications and improvements suggested by the reviewers and made in the author response. I also encourage the authors to include the results on nocaps, as suggested by R4.

